1.
J Vis ; 24(4): 19, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38652657

ABSTRACT

Researchers increasingly use virtual reality (VR) to perform behavioral experiments, especially in vision science. These experiments are usually programmed directly in so-called game engines that are extremely powerful. However, this process is tricky and time-consuming as it requires solid knowledge of game engines. Consequently, the anticipated prohibitive effort discourages many researchers who want to engage in VR. This paper introduces the Perception Toolbox for Virtual Reality (PTVR) library, allowing visual perception studies in VR to be created using high-level Python script programming. A crucial consequence of using a script is that an experiment can be described by a single, easy-to-read piece of code, thus improving VR studies' transparency, reproducibility, and reusability. We built our library upon a seminal open-source library released in 2018 that we have considerably developed since then. This paper aims to provide a comprehensive overview of the PTVR software for the first time. We introduce the main objects and features of PTVR and some general concepts related to the three-dimensional (3D) world. This new library should dramatically reduce the difficulty of programming experiments in VR and elicit a whole new set of visual perception studies with high ecological validity.


Subject(s)
Software , Virtual Reality , Humans , Reproducibility of Results , Visual Perception/physiology , User-Computer Interface
2.
J Magn Reson ; 361: 107662, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38574458

ABSTRACT

The open-source console MaRCoS, which stands for "Magnetic Resonance Control System", combines hardware, firmware and software elements for integral control of Magnetic Resonance Imaging (MRI) scanners. Previous developments have focused on making the system robust and reliable, rather than on users, who have been somewhat overlooked. This work describes a Graphical User Interface (GUI) designed for intuitive control of MaRCoS, as well as compatibility with clinical environments. The GUI is based on an arrangement of tabs and a renewed Application Programming Interface (API). Compared to previous versions, the MaRGE package ("MaRCoS Graphical Environment") includes new functionalities such as the possibility to export images to standard DICOM formats, create and manage clinical protocols, or display and process image reconstructions, among other features conceived to simplify the operation of MRI scanners. All prototypes in our facilities are commanded by MaRCoS and operated with the new GUI. Here we report on its performance on an experimental 0.2 T scanner designed for hard-tissue imaging, as well as a 72 mT portable scanner presently installed in the radiology department of a large hospital. The possibility to customize, adapt and streamline processes has substantially improved our workflows and overall experience.


Subject(s)
Software , User-Computer Interface , Computers , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted
3.
JCO Clin Cancer Inform ; 8: e2300187, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38657194

ABSTRACT

PURPOSE: Use of artificial intelligence (AI) in cancer care is increasing. What remains unclear is how best to design patient-facing systems that communicate AI output. With oncologist input, we designed an interface that presents patient-specific, machine learning-based 6-month survival prognosis information designed to aid oncology providers in preparing for and discussing prognosis with patients with advanced solid tumors and their caregivers. The primary purpose of this study was to assess patient and caregiver perceptions and identify enhancements of the interface for communicating 6-month survival and other prognosis information when making treatment decisions concerning anticancer and supportive therapy. METHODS: This qualitative study included interviews and focus groups conducted between November and December 2022. Purposive sampling was used to recruit former patients with cancer and/or former caregivers of patients with cancer who had participated in cancer treatment decisions from Utah or elsewhere in the United States. Categories and themes related to perceptions of the interface were identified. RESULTS: We received feedback from 20 participants during eight individual interviews and two focus groups, including four cancer survivors, 13 caregivers, and three representing both. Overall, most participants expressed positive perceptions about the tool and identified its value for supporting decision making, feeling less alone, and supporting communication among oncologists, patients, and their caregivers. Participants identified areas for improvement and implementation considerations, particularly that oncologists should share the tool and guide discussions about prognosis with patients who want to receive the information. CONCLUSION: This study revealed important patient and caregiver perceptions of and enhancements for the proposed interface. 
Although the interface was originally designed with input from oncology providers, patient and caregiver participants identified additional interface design recommendations and implementation considerations to support communication about prognosis.


Subject(s)
Artificial Intelligence , Caregivers , Neoplasms , Humans , Caregivers/psychology , Neoplasms/psychology , Neoplasms/therapy , Prognosis , Female , Male , Middle Aged , Aged , Focus Groups , Adult , Qualitative Research , Communication , Perception , User-Computer Interface
4.
BMC Bioinformatics ; 25(1): 162, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658834

ABSTRACT

BACKGROUND: The results of high-throughput biology ('omic') experiments provide insight into biological mechanisms but can be challenging to explore, archive and share. The scale of these challenges continues to grow as omic research volume expands and multiple analytical technologies, bioinformatic pipelines, and visualization preferences have emerged. Multiple software applications exist that support omic study exploration and/or archival. However, an opportunity remains for open-source software that can archive and present the results of omic analyses with broad accommodation of study-specific analytical approaches and visualizations with useful exploration features. RESULTS: We present OmicNavigator, an R package for the archival, visualization and interactive exploration of omic studies. OmicNavigator enables bioinformaticians to create web applications that interactively display their custom visualizations and analysis results linked with app-derived analytical tools, graphics, and tables. Studies created with OmicNavigator can be viewed within an interactive R session or hosted on a server for shared access. CONCLUSIONS: OmicNavigator can be found at https://github.com/abbvie-external/OmicNavigator.


Subject(s)
Computational Biology , Software , Computational Biology/methods , User-Computer Interface , Computer Graphics
5.
BMC Genomics ; 25(1): 402, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658838

ABSTRACT

BACKGROUND: In recent years, single-cell RNA sequencing (scRNA-seq) has become increasingly accessible to researchers in many fields. However, interpreting its data demands proficiency in multiple programming languages and bioinformatic skills, which prevents researchers without such expertise from fully exploring scRNA-seq data. There is therefore a pressing need for easy-to-use software covering all aspects of scRNA-seq data analysis. RESULTS: We propose a clear analysis framework for scRNA-seq data that emphasizes the fundamental and crucial role of cell identity annotation, abstracting the analysis process into three stages: upstream analysis, cell annotation, and downstream analysis. The framework equips researchers with a comprehensive understanding of the analysis procedure and facilitates effective data interpretation. Leveraging this framework, we engineered Shaoxia, an analysis platform designed to democratize scRNA-seq analysis by accelerating processing through high-performance computing and by offering a user-friendly interface accessible even to wet-lab researchers without programming expertise. CONCLUSION: Shaoxia is a powerful, user-friendly open-source platform for automated scRNA-seq analysis, offering comprehensive functionality for streamlined functional genomics studies. Shaoxia is freely accessible at http://www.shaoxia.cloud , and its source code is publicly available at https://github.com/WiedenWei/shaoxia .


Subject(s)
Sequence Analysis, RNA , Single-Cell Analysis , Software , Single-Cell Analysis/methods , Sequence Analysis, RNA/methods , Internet , Humans , Computational Biology/methods , RNA-Seq/methods , User-Computer Interface
6.
JMIR Hum Factors ; 11: e51522, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564261

ABSTRACT

BACKGROUND: More than 18 million cancer survivors are living in the United States. The effects of cancer and its treatments can have cognitive, psychological, physical, and social consequences that many survivors find incredibly disabling. Posttreatment support is often unavailable or underused, especially for survivors living with disabilities. This leaves them to deal with new obstacles and struggles on their own, often feeling lost during this transition. Mobile health (mHealth) interventions have been shown to effectively aid cancer survivors in dealing with many of the aftereffects of cancer and its treatments; these interventions hold immense potential for survivors living with disabilities. We developed a prototype for WeCanManage, an mHealth-delivered self-management intervention to empower cancer survivors living with disabilities through problem-solving, mindfulness, and self-advocacy training. OBJECTIVE: Our study conducted a heuristic evaluation of the WeCanManage high-fidelity prototype and assessed its usability among cancer survivors with known disabilities. METHODS: We evaluated the prototype using Nielsen's 10 principles of heuristic evaluation with 22 human-computer interaction university students. On the basis of the heuristic evaluation findings, we modified the prototype and conducted usability testing with 10 cancer survivors with a variety of known disabilities, examining effectiveness, efficiency, usability, and satisfaction, including completion of the modified System Usability Scale (SUS). RESULTS: The findings from the heuristic evaluation were mostly favorable, highlighting the need for a help guide, addressing accessibility concerns, and enhancing the navigation experience. After usability testing, the average SUS score was 81, indicating a good-to-excellent design.
The participants in the usability testing sample expressed positive reactions toward the app's design, educational content and videos, and the available means of connecting with others. They identified areas for improvement, such as improving accessibility, simplifying navigation within the community forums, and providing a more convenient method to access the help guide. CONCLUSIONS: Overall, usability testing showed positive results for the design of WeCanManage. The course content and features helped participants feel heard, understood, and less alone.


Subject(s)
Cancer Survivors , Mobile Applications , Neoplasms , Humans , User-Centered Design , Heuristics , User-Computer Interface , Power, Psychological , Neoplasms/therapy
7.
J Neuroeng Rehabil ; 21(1): 60, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654367

ABSTRACT

OBJECTIVE: The objective of this study was to evaluate users' driving performance with a Power Wheelchair (PWC) driving simulator in comparison to the same driving task in real conditions with a standard power wheelchair. METHODS: Three driving circuits of progressive difficulty (C1, C2, C3), developed to assess PWC driving performance in indoor situations, were used in this study. These circuits were modeled in a 3D Virtual Environment to replicate the three driving task scenarios in Virtual Reality (VR). Users were asked to complete the three circuits under two testing conditions, in VR and on a real circuit (R), during three successive sessions. During each session, users completed both conditions. Driving performance was evaluated using the number of collisions and the time to complete the circuit. In addition, driving ability, via the Wheelchair Skill Test (WST), and mental load were assessed in both conditions. Cybersickness, user satisfaction and sense of presence were measured in VR. The order of the R and VR conditions was randomized. RESULTS: Thirty-one participants with neurological disorders who were expert wheelchair drivers were included in the study. Driving performance in the VR and R conditions was statistically different for the C3 circuit but not for the two easiest circuits, C1 and C2. The results of the WST were not statistically different in C1, C2 and C3. The mental load was higher in VR than in the R condition. The general sense of presence was reported as acceptable (mean value of 4.6 out of 6) for all participants, and cybersickness was reported as acceptable (SSQ mean value of 4.25 across the three circuits in the VR condition). CONCLUSION: Driving performance was statistically different in the most complicated circuit, C3, with an increased number of collisions in VR, but was not statistically different between R and VR for the two easiest circuits, C1 and C2.
In addition, there were no significant adverse effects such as cybersickness. The results show the value of the simulator for driving training applications. Still, the mental load was higher in VR than in the R condition, which may limit its use with people with cognitive disorders. Further studies should be conducted to assess the quality of skill transfer from the simulator to the real world for novice drivers. Trial registration: Ethical approval no. 2019-A001306-51 from Comité de Protection des Personnes Sud Mediterranée IV. Trial registered on 19/11/2019 on ClinicalTrials.gov under ID NCT04171973.


Subject(s)
Wheelchairs , Humans , Pilot Projects , Male , Adult , Female , Middle Aged , Virtual Reality , Automobile Driving/psychology , Computer Simulation , User-Computer Interface , Psychomotor Performance/physiology , Aged , Young Adult , Nervous System Diseases/psychology
8.
Front Public Health ; 12: 1293621, 2024.
Article in English | MEDLINE | ID: mdl-38584921

ABSTRACT

Introduction: Falls are a major worldwide health problem in older people. Several physical rehabilitation programs with home-based technologies, such as the online DigiRehab platform, have been successfully delivered. The PRECISE project combines personalized training delivered through the application with an artificial intelligence-based predictive model (AI-DSS platform) for fall risk assessment. This new system, called DigiRehab, will enable early identification of significant risk factors for falling and propose an individualized physical training plan to address these critical areas. Methods: The study will test the usability of the DigiRehab platform in generating personalized physical rehabilitation programs at home. Fifty older adult participants will be involved: 20 testing the beta prototype, and 30 testing the updated version afterwards. The inclusion criteria will be age ≥65, independent ambulation, fall risk (Tinetti test), Mini Mental State Examination ≥24, home residence, familiarity with web applications, and the ability and willingness to sign informed consent. Exclusion criteria will be unstable clinical condition, severe visual and/or hearing impairment, severe impairment in Activities of Daily Living, and absence of a primary caregiver. Discussion: The first part of the screening consists of a structured questionnaire of 10 questions regarding the user's limitations, including the risk of falling, while the second consists of 10 physical tests to assess functional status. Based on the results, the program will help define the user's individual profile, upon which the DSS platform will rate the risk of falling and design the personalized exercise program to be carried out at home. All measures from the initial screening will be repeated and the results will be used to optimize the predictive algorithms in order to prepare the tool in its final version.
For the usability assessment, the System Usability Scale will be administered. The follow-up will take place after the 12-week intervention at home. A semi-structured satisfaction questionnaire will also be administered to verify whether the project meets the needs of older adults and their family caregivers. Conclusion: We expect that personalized training prescribed by the DigiRehab platform could help reduce the need for care in older adults and the associated care burden. Clinical trial registration: [https://clinicaltrials.gov/], identifier [NCT05846776].


Subject(s)
Accidental Falls , Activities of Daily Living , Aged , Humans , Accidental Falls/prevention & control , Artificial Intelligence , Europe , Feasibility Studies , Italy , User-Computer Interface , Clinical Trials as Topic
9.
Sci Rep ; 14(1): 8093, 2024 04 06.
Article in English | MEDLINE | ID: mdl-38582769

ABSTRACT

This study investigated brain responses during cybersickness in healthy adults using functional near-infrared spectroscopy (fNIRS). Thirty participants wore a head-mounted display and observed a virtual roller coaster scene that induced cybersickness. Cortical activation during the virtual roller coaster task was measured using fNIRS. Cybersickness symptoms were evaluated using a Simulator Sickness Questionnaire (SSQ) administered after the virtual rollercoaster. Pearson correlations were performed for cybersickness symptoms and the beta coefficients of hemodynamic responses. The group analysis of oxyhemoglobin (HbO) and total hemoglobin (HbT) levels revealed deactivation in the bilateral angular gyrus during cybersickness. In the Pearson correlation analyses, the HbO and HbT beta coefficients in the bilateral angular gyrus had a significant positive correlation with the total SSQ and disorientation. These results indicated that the angular gyrus was associated with cybersickness. These findings suggest that the hemodynamic response in the angular gyrus could be a biomarker for evaluating cybersickness symptoms.
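The core statistical step in this abstract, correlating per-participant cybersickness scores with hemodynamic-response beta coefficients, reduces to a Pearson correlation. A minimal self-contained sketch with made-up illustrative numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values only: total SSQ scores and angular-gyrus HbO beta coefficients
ssq = [12.0, 25.0, 40.0, 55.0, 70.0]
beta = [-0.8, -0.5, -0.1, 0.2, 0.6]
r = pearson_r(ssq, beta)  # strongly positive for these illustrative numbers
```

In practice one would also report a p-value (e.g. via `scipy.stats.pearsonr`); the sketch keeps only the coefficient to stay dependency-free.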


Subject(s)
Motion Sickness , Adult , Humans , User-Computer Interface , Hemodynamics/physiology , Oxyhemoglobins , Brain
11.
ACS Sens ; 9(4): 1886-1895, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38529839

ABSTRACT

Smart gloves are often used in human-computer interaction scenarios due to their portability and ease of integration. However, their application in the field of information security has been less studied. Herein, we propose a smart glove using an iontronic capacitive sensor with significant pressure-sensing performance. In addition, an operator interface has been developed to match the smart glove, capable of multitasking integration of mouse movement, music playback, game control, and message typing in Internet chat rooms by capturing and encoding finger-tapping movements. Furthermore, by integrating machine learning, we can mine the characteristics of individual behavioral habits contained in the sensor signals and, on this basis, achieve a deep binding of the user to the smart glove. The proposed smart glove can greatly facilitate people's lives and opens a new direction for research on the application of smart gloves in data security.
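The abstract does not specify the model used for binding a user to the glove. As an illustration only, behavioral verification on tap-timing features can be sketched with a simple nearest-centroid rule (hypothetical data and threshold, not the paper's method):

```python
def centroid(samples):
    """Mean feature vector of a list of equal-length vectors."""
    n = len(samples)
    return [sum(v[i] for v in samples) / n for i in range(len(samples[0]))]

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def verify(sample, user_centroid, threshold):
    """Accept the tap pattern if it is close enough to the enrolled centroid."""
    return dist(sample, user_centroid) <= threshold

# Hypothetical inter-tap intervals (ms): three enrolment samples, then two probes
enrolled = centroid([[120, 95, 140], [118, 97, 138], [122, 93, 142]])
genuine = verify([121, 96, 139], enrolled, threshold=10.0)   # matches enrolled habits
impostor = verify([180, 60, 200], enrolled, threshold=10.0)  # different habits, rejected
```

A production system would use richer features (pressure profiles, not just timings) and a trained classifier, but the binding idea, comparing a live signal against a user-specific template, is the same.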


Subject(s)
Hydrogels , Machine Learning , Hydrogels/chemistry , Humans , Gloves, Protective , Computer Security , User-Computer Interface
12.
Comput Methods Programs Biomed ; 249: 108142, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38547688

ABSTRACT

BACKGROUND AND OBJECTIVES: Virtual training has emerged as an exceptionally effective approach for training healthcare practitioners in vascular intervention surgery. By providing a simulated environment and blood vessel model that enable repeated practice, virtual training facilitates the acquisition of surgical skills in a safe and efficient manner. However, current research in this area is limited by the fidelity of blood vessel and guidewire models, which restricts training effectiveness. Existing approaches also lack the necessary real-time responsiveness and precision, and the blood vessel models suffer from incompleteness and a lack of scientific rigor. METHODS: To address these challenges, this paper integrates position-based dynamics (PBD) and its extensions, shape matching, and Cosserat elastic rods. By combining these approaches within a unified particle framework, accurate and realistic deformation simulation of personalized blood vessel and guidewire models is achieved, thereby enhancing the training experience. Furthermore, a multi-level progressive continuous collision detection method, leveraging spatial hashing, is proposed to improve the accuracy and efficiency of collision detection. RESULTS: Our proposed blood vessel model reduced the deformation simulation response time to 7 ms, improving real-time capability by at least 43.75%. Experimental validation confirmed that the proposed guidewire model can dynamically adjust the density of its elastic rods to alter the degree of bending and torsion. It also exhibited a deformation process comparable to that of real guidewires, with an average response time of 6 ms.
In the interaction of blood vessel and guidewire models, the simulator blood vessel model used for coronary vascular intervention training exhibited an average response time of 15.42 ms, with a frame rate of approximately 64 FPS. CONCLUSIONS: The method presented in this paper achieves deformation simulation of both vascular and guidewire models, demonstrating sufficient real-time performance and accuracy. The interaction efficiency between vascular and guidewire models is enhanced through the unified simulation framework and collision detection. Furthermore, it can be integrated with virtual training scenarios within the system, making it suitable for developing more advanced vascular interventional surgery training systems.
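The abstract does not detail its multi-level collision scheme, but the spatial-hashing broad phase it builds on follows a standard pattern: bucket particles by grid cell, then test only particles in the same or adjacent cells. A minimal single-level sketch (uniform grid, illustrative only; the paper's progressive method is more elaborate):

```python
from collections import defaultdict
from itertools import product

def spatial_hash_pairs(points, cell_size):
    """Broad-phase candidate pairs: bucket 3-D points into grid cells and
    pair each point only with points in its own or an adjacent cell."""
    grid = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        grid[(int(x // cell_size), int(y // cell_size), int(z // cell_size))].append(i)
    pairs = set()
    for (cx, cy, cz), members in grid.items():
        # Scan the 3x3x3 neighborhood around each occupied cell
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                for i in members:
                    if i < j:  # canonical ordering avoids duplicates
                        pairs.add((i, j))
    return pairs

# Two nearby particles and one far away: only the close pair is a candidate
pts = [(0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (5.0, 5.0, 5.0)]
candidates = spatial_hash_pairs(pts, cell_size=0.5)  # {(0, 1)}
```

Only candidate pairs then go to the (expensive) narrow-phase continuous collision test, which is where the speedup over all-pairs checking comes from.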


Subject(s)
Virtual Reality , Computer Simulation , User-Computer Interface
13.
IEEE Trans Vis Comput Graph ; 30(5): 2247-2256, 2024 May.
Article in English | MEDLINE | ID: mdl-38437075

ABSTRACT

Physical QWERTY keyboards are the current standard for performing precision text-entry with extended reality devices. Ideally, there would exist a comparable, self-contained solution that works anywhere, without requiring external keyboards. Unfortunately, when physical keyboards are recreated virtually, we currently lose critical haptic feedback information from the sense of touch, which impedes typing. In this paper, we introduce the MusiKeys Technique, which uses auditory feedback in virtual reality to communicate missing haptic feedback information typists normally receive when using a physical keyboard. To examine this concept, we conducted a user study with 24 participants which encompassed four mid-air virtual keyboards augmented with increasing amounts of feedback information, along with a fifth physical keyboard for reference. Results suggest that providing clicking feedback on key-press and key-release improves typing performance compared to not providing auditory feedback, which is consistent with the literature. We also found that audio can serve as a substitute for information contained in haptic feedback, in that users can accurately perceive the presented information. However, under our specific study conditions, this awareness of the feedback information did not yield significant differences in typing performance. Our results suggest this kind of feedback replacement can be perceived by users but needs more research to tune and improve the specific techniques.


Subject(s)
Haptic Technology , Touch Perception , Humans , Equipment Design , Computer Graphics , Touch , User-Computer Interface
14.
IEEE Trans Vis Comput Graph ; 30(5): 2206-2216, 2024 May.
Article in English | MEDLINE | ID: mdl-38437082

ABSTRACT

In Mixed Reality (MR), users' heads are largely (if not completely) occluded by the MR Head-Mounted Display (HMD) they are wearing. As a consequence, one cannot see their facial expressions and other communication cues when interacting locally. In this paper, we investigate how displaying virtual avatars' heads on top of the (HMD-occluded) heads of participants in a Video See-Through (VST) Mixed Reality local collaborative task could improve their collaboration as well as social presence. We hypothesized that virtual heads would convey more communicative cues (such as eye direction or facial expressions) hidden by the MR HMDs and lead to better collaboration and social presence. To do so, we conducted a between-subjects study (n = 88) with two independent variables: the type of avatar (CartoonAvatar/RealisticAvatar/NoAvatar) and the level of facial expressions provided (HighExpr/LowExpr). The experiment involved two dyadic communication tasks: (i) the "20-question" game, where one participant asks questions to guess a hidden word known by the other participant, and (ii) an urban planning problem, where participants have to solve a puzzle by collaborating. Each pair of participants performed both tasks using a specific type of avatar and facial animation. Our results indicate that while adding an avatar's head does not necessarily improve social presence, the amount of facial expressions provided through the social interaction does have an impact. Moreover, participants rated their performance higher when observing a realistic avatar but rated the cartoon avatars as less uncanny. Taken together, our results contribute to a better understanding of the role of partial avatars in local MR collaboration and pave the way for further research exploring collaboration in different scenarios, with different avatar types or MR setups.


Subject(s)
Augmented Reality , Humans , User-Computer Interface , Computer Graphics , Facial Expression
15.
IEEE Trans Vis Comput Graph ; 30(5): 2580-2590, 2024 May.
Article in English | MEDLINE | ID: mdl-38437094

ABSTRACT

VR exergames offer an engaging solution to combat sedentary behavior and promote physical activity. However, challenges emerge when playing these games in shared spaces, particularly due to the presence of bystanders. VR's passthrough functionality enables players to maintain awareness of their surrounding environment while immersed in VR gaming, rendering it a promising solution to improve users' awareness of the environment. This study investigates the passthrough's impact on player performance and experiences in shared spaces, involving an experiment with 24 participants that examines Space (Office vs. Corridor) and Passthrough Function (With vs. Without). Results reveal that Passthrough enhances game performance and environmental awareness while reducing immersion. Players prefer an open area to an enclosed room, whether with or without Passthrough, finding it more socially acceptable. Additionally, Passthrough appears to encourage participation among players with higher self-consciousness, potentially alleviating their concerns about being observed by bystanders. Our findings provide valuable insights for designing VR experiences in shared spaces, underscoring the potential of VR's passthrough to enhance user experiences and promote VR adoption in these environments.


Subject(s)
Exergaming , Virtual Reality , Humans , User-Computer Interface , Computer Graphics , Exercise
16.
IEEE Trans Vis Comput Graph ; 30(5): 2109-2118, 2024 May.
Article in English | MEDLINE | ID: mdl-38437112

ABSTRACT

The sense of embodiment in virtual reality (VR) is commonly understood as the subjective experience that one's physical body is substituted by a virtual counterpart, and is typically achieved when the avatar's body, seen from a first-person view, moves like one's physical body. Embodiment can also be experienced in other circumstances (e.g., in third-person view) or with imprecise or distorted visuo-motor coupling. It was moreover observed, in various cases of small or progressive temporal and spatial manipulations of avatars' movements, that participants may spontaneously follow the movement shown by the avatar. The present work investigates whether, in some specific contexts, participants would follow what their avatar does even when large movement discrepancies occur, thereby extending the scope of understanding of the self-avatar follower effect beyond subtle changes of motion or speed manipulations. We conducted an experimental study in which we introduced uncertainty about which movement to perform at specific times and analyzed participants' movements and subjective feedback after their avatar showed them an incorrect movement. Results show that, when in doubt, participants were influenced by their avatar's movements, leading them to perform that particular error twice as often as normal. Importantly, the embodiment scores indicate that participants experienced a dissociation from their avatar at those times. Overall, these observations not only demonstrate that situations can be provoked in which participants follow the guidance of their avatar despite large motor distortions, and despite their awareness of the avatar's movement disruption and of its possible influence on their choice, but also exemplify how the cognitive mechanism of embodiment is deeply rooted in the necessity of having a body.


Subject(s)
Virtual Reality , Humans , User-Computer Interface , Computer Graphics , Movement
17.
IEEE Trans Vis Comput Graph ; 30(5): 2434-2443, 2024 May.
Article in English | MEDLINE | ID: mdl-38437125

ABSTRACT

In many consumer virtual reality (VR) applications, users embody predefined characters that offer minimal customization options, frequently emphasizing storytelling over user choice. We explore whether matching a user's physical characteristics, specifically ethnicity and gender, with their virtual self-avatar affects their sense of embodiment in VR. We conducted a 2 × 2 within-subjects experiment (n = 32) with a diverse user population to explore the impact of matching or not matching a user's self-avatar to their ethnicity and gender on their sense of embodiment. Our results indicate that matching the ethnicity of the user and their self-avatar significantly enhances sense of embodiment regardless of gender, extending across various aspects, including appearance, response, and ownership. We also found that matching gender significantly enhanced ownership, suggesting that this aspect is influenced by matching both ethnicity and gender. Interestingly, we found that matching ethnicity specifically affects self-location while matching gender specifically affects one's body ownership.


Subject(s)
Virtual Reality , Humans , Ethnicity , Shoes , User-Computer Interface , Computer Graphics
18.
IEEE Trans Vis Comput Graph ; 30(5): 2066-2076, 2024 May.
Article in English | MEDLINE | ID: mdl-38437132

ABSTRACT

Several studies have shown that users of immersive virtual reality can feel high levels of embodiment in self-avatars that have different morphological proportions than those of their actual bodies. Deformed and unrealistic morphological modifications are accepted by embodied users, underscoring the adaptability of one's mental map of their body (body schema) in response to incoming sensory feedback. Before initiating a motor action, the brain uses the body schema to plan and sequence the necessary movements. Therefore, embodiment in a self-avatar with a different morphology, such as one with deformed proportions, could lead to changes in motor planning and execution. In this study, we aimed to measure the effects on movement planning and execution of embodying a self-avatar with an enlarged lower leg on one side. Thirty participants embodied an avatar without any deformations, and with an enlarged dominant or non-dominant leg, in randomized order. Two different levels of embodiment were induced, using synchronous or asynchronous visuotactile stimuli. In each condition, participants performed a gait initiation task. Their center of mass and center of pressure were measured, and the margin of stability (MoS) was computed from these values. Their perceived level of embodiment was also measured, using a validated questionnaire. Results show no significant changes in the biomechanical variables related to dynamic stability. Embodiment scores decreased with asynchronous stimuli, without impacting the measures related to stability. The body schema may not have been impacted by the larger virtual leg. However, deforming the self-avatar's morphology could have important implications when addressing individuals with impaired physical mobility by subtly influencing action execution during a rehabilitation protocol.
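The margin of stability mentioned here is commonly computed, following Hof's extrapolated-center-of-mass formulation, as the distance from the extrapolated CoM (XcoM = CoM + v_CoM/ω0, with ω0 = sqrt(g/l)) to the base-of-support boundary. Assuming that standard definition (the abstract does not give the exact formula used), a one-dimensional sketch with illustrative numbers:

```python
import math

def margin_of_stability(com, com_velocity, bos_boundary, leg_length, g=9.81):
    """1-D margin of stability via the extrapolated centre of mass (XcoM).
    Positive when the XcoM stays inside the base-of-support boundary
    in the direction of travel."""
    omega0 = math.sqrt(g / leg_length)      # inverted-pendulum eigenfrequency
    xcom = com + com_velocity / omega0      # extrapolated centre of mass
    return bos_boundary - xcom

# Illustrative values (m, m/s): CoM 5 cm behind the anterior BoS boundary,
# moving forward at 0.1 m/s, leg length 0.9 m
mos = margin_of_stability(com=0.10, com_velocity=0.1, bos_boundary=0.15, leg_length=0.9)
# Forward velocity shrinks the margin below the static 0.05 m, but it stays positive
```

In gait-initiation experiments like this one, the CoM trajectory comes from motion capture or force-plate data and the computation is done per axis (anteroposterior and mediolateral); the sketch shows only the scalar core.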


Subject(s)
Leg , Humans , User-Computer Interface , Computer Graphics , Brain
19.
Proc Natl Acad Sci U S A ; 121(13): e2314901121, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38466880

ABSTRACT

Tactile perception of softness serves a critical role in the survival, well-being, and social interaction among various species, including humans. This perception informs activities from food selection in animals to medical palpation for disease detection in humans. Despite its fundamental importance, a comprehensive understanding of how softness is neurologically and cognitively processed remains elusive. Previous research has demonstrated that the somatosensory system leverages both cutaneous and kinesthetic cues for the sensation of softness. Factors such as contact area, depth, and force play a particularly critical role in sensations experienced at the fingertips. Yet, existing haptic technologies designed to explore this phenomenon are limited, as they often couple force and contact area, failing to provide a real-world experience of softness perception. Our research introduces the softness-rendering interface (SORI), a haptic softness display designed to bridge this knowledge gap. Unlike its predecessors, SORI has the unique ability to decouple contact area and force, thereby allowing for a quantitative representation of softness sensations at the fingertips. Furthermore, SORI incorporates individual physical fingertip properties and model-based softness cue estimation and mapping to provide a highly personalized experience. Utilizing this method, SORI quantitatively replicates the sensation of softness on stationary, dynamic, homogeneous, and heterogeneous surfaces. We demonstrate that SORI accurately renders the surfaces of both virtual and daily objects, thereby presenting opportunities across a range of fields, from teleoperation to medical technology. Finally, our proposed method and SORI will expedite psychological and neuroscience research to unlock the nature of softness perception.


Subject(s)
Touch Perception , Humans , Skin , Cues , Fingers , Touch , User-Computer Interface
20.
Sensors (Basel) ; 24(6)2024 Mar 10.
Article in English | MEDLINE | ID: mdl-38544043

ABSTRACT

This study employs Multiscale Entropy (MSE) to analyze 5020 binocular eye movement recordings from 407 college-aged participants, as part of the GazeBaseVR dataset, across various virtual reality (VR) tasks to understand the complexity of user interactions. By evaluating the vertical and horizontal components of eye movements across tasks such as vergence, smooth pursuit, video viewing, reading, and random saccade, collected at 250 Hz using an ET-enabled VR headset, this research provides insights into the predictability and complexity of gaze patterns. Participants were recorded up to six times over a 26-month period, offering a longitudinal perspective on eye movement behavior in VR. MSE's application in this context aims to offer a deeper understanding of user behavior in VR, highlighting potential avenues for interface optimization and user experience enhancement. The results suggest that MSE can be a valuable tool in creating more intuitive and immersive VR environments by adapting to users' gaze behaviors. This paper discusses the implications of these findings for the future of VR technology development, emphasizing the need for intuitive design and the potential for MSE to contribute to more personalized and comfortable VR experiences.
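Multiscale Entropy, as used in this study, coarse-grains a signal at successive scales and takes the sample entropy of each coarse-grained series, conventionally fixing the tolerance r at 0.2 times the standard deviation of the original series. A minimal sketch of that procedure, assuming a naive O(N²) match count; this is not the study's actual analysis pipeline:

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m and
    A pairs of length m+1 within Chebyshev distance r (self-matches excluded)."""
    n = len(x)
    def count(mm):
        c = 0
        for i in range(n - mm):
            for j in range(i + 1, n - mm):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def multiscale_entropy(x, scales=(1, 2, 3), m=2):
    """Coarse-grain x by non-overlapping averaging at each scale, then take
    SampEn of each coarse series; r is fixed from the original series' SD."""
    mean = sum(x) / len(x)
    r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    out = []
    for tau in scales:
        coarse = [sum(x[i:i + tau]) / tau
                  for i in range(0, len(x) - tau + 1, tau)]
        out.append(sample_entropy(coarse, m, r))
    return out
```

Applied to gaze traces, lower values indicate more regular, predictable eye movement at that time scale; comparing the curve of SampEn across scales is what distinguishes tasks such as reading from random saccades.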


Subject(s)
Virtual Reality , Humans , Young Adult , Entropy , Eye Movements , Saccades , User-Computer Interface